Source of the materials: Biopython cookbook (adapted) Status: Draft

Cluster analysis

Cluster analysis is the grouping of items into clusters based on the similarity of the items to each other. In bioinformatics, clustering is widely used in gene expression data analysis to find groups of genes with similar gene expression profiles. This may identify functionally related genes, as well as suggest the function of presently unknown genes.

The Biopython module Bio.Cluster provides commonly used clustering algorithms and was designed with the application to gene expression data in mind. However, this module can also be used for cluster analysis of other types of data. Bio.Cluster and the underlying C Clustering Library are described by De Hoon et al. @dehoon2004.

The following four clustering approaches are implemented in Bio.Cluster:

  • Hierarchical clustering (pairwise centroid-, single-, complete-, and average-linkage);
  • \(k\)-means, \(k\)-medians, and \(k\)-medoids clustering;
  • Self-Organizing Maps;
  • Principal Component Analysis.

Data representation

The data to be clustered are represented by an \(n \times m\) Numerical Python array data. Within the context of gene expression data clustering, typically the rows correspond to different genes whereas the columns correspond to different experimental conditions. The clustering algorithms in Bio.Cluster can be applied both to rows (genes) and to columns (experiments).

Missing values

Often in microarray experiments, some of the data values are missing, which is indicated by an additional \(n \times m\) Numerical Python integer array mask. If mask[i,j]==0, then data[i,j] is missing and is ignored in the analysis.
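
For concreteness, a minimal sketch of such arrays (using NumPy; the values are invented for illustration):

import numpy as np

# Hypothetical expression data: 4 genes (rows) x 3 conditions (columns)
data = np.array([[1.2, 0.8, 9.0],
                 [0.9, 1.1, 4.5],
                 [5.1, 8.7, 0.3],
                 [4.8, 9.1, 0.2]])

# mask[i, j] == 0 marks data[i, j] as missing; here the value in the
# first row, third column is treated as absent.
mask = np.ones(data.shape, dtype=int)
mask[0, 2] = 0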

Random number generator

The \(k\)-means/medians/medoids clustering algorithms and Self-Organizing Maps (SOMs) include the use of a random number generator. The uniform random number generator in Bio.Cluster is based on the algorithm by L’Ecuyer @lecuyer1988, while random numbers following the binomial distribution are generated using the BTPE algorithm by Kachitvichyanukul and Schmeiser @kachitvichyanukul1988. The random number generator is initialized automatically during its first call. As this random number generator uses a combination of two multiplicative linear congruential generators, two (integer) seeds are needed for initialization, for which we use the system-supplied random number generator rand (in the C standard library). We initialize this generator by calling srand with the epoch time in seconds, and use the first two random numbers generated by rand as seeds for the uniform random number generator in Bio.Cluster.

Distance functions

In order to cluster items into groups based on their similarity, we should first define what exactly we mean by similar. Bio.Cluster provides eight distance functions, indicated by a single character, to measure similarity, or conversely, distance:

  • 'e': Euclidean distance;
  • 'b': City-block distance;
  • 'c': Pearson correlation coefficient;
  • 'a': Absolute value of the Pearson correlation coefficient;
  • 'u': Uncentered Pearson correlation (equivalent to the cosine of the angle between two data vectors);
  • 'x': Absolute uncentered Pearson correlation;
  • 's': Spearman’s rank correlation;
  • 'k': Kendall’s \(\tau\).

The first two are true distance functions that satisfy the triangle inequality:

\[d\left(\underline{u},\underline{v}\right) \leq d\left(\underline{u},\underline{w}\right) + d\left(\underline{w},\underline{v}\right) \textrm{ for all } \underline{u}, \underline{v}, \underline{w},\]

and are therefore referred to as *metrics*. In everyday language, this means that the shortest distance between two points is a straight line.

The remaining six distance measures are related to the correlation coefficient, where the distance \(d\) is defined in terms of the correlation \(r\) by \(d=1-r\). Note that these distance functions are semi-metrics that do not satisfy the triangle inequality. For example, for

\[\underline{u}=\left(1,0,-1\right);\]
\[\underline{v}=\left(1,1,0\right);\]
\[\underline{w}=\left(0,1,1\right);\]

we find a Pearson distance \(d\left(\underline{u},\underline{w}\right) = 1.8660\), while \(d\left(\underline{u},\underline{v}\right)+d\left(\underline{v},\underline{w}\right) = 1.6340\).
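
This violation can be checked numerically; a small sketch using NumPy:

import numpy as np

u = np.array([1.0, 0.0, -1.0])
v = np.array([1.0, 1.0, 0.0])
w = np.array([0.0, 1.0, 1.0])

def pearson_distance(a, b):
    # d = 1 - r, with r the Pearson correlation coefficient
    return 1.0 - np.corrcoef(a, b)[0, 1]

print(pearson_distance(u, w))                           # ~1.866
print(pearson_distance(u, v) + pearson_distance(v, w))  # ~1.634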

Euclidean distance

In Bio.Cluster, we define the Euclidean distance as

\[d = {1 \over n} \sum_{i=1}^{n} \left(x_i-y_i\right)^{2}.\]

Only those terms are included in the summation for which both \(x_i\) and \(y_i\) are present, and the denominator \(n\) is chosen accordingly. As the expression data \(x_i\) and \(y_i\) are subtracted directly from each other, we should make sure that the expression data are properly normalized when using the Euclidean distance.

City-block distance

The city-block distance, alternatively known as the Manhattan distance, is related to the Euclidean distance. Whereas the Euclidean distance corresponds to the length of the shortest path between two points, the city-block distance is the sum of distances along each dimension. As gene expression data tend to have missing values, in Bio.Cluster we define the city-block distance as the sum of distances divided by the number of dimensions:

\[d = {1 \over n} \sum_{i=1}^n \left|x_i-y_i\right|.\]

This is equal to the distance you would have to walk between two points in a city, where you have to walk along city blocks. As for the Euclidean distance, the expression data are subtracted directly from each other, and we should therefore make sure that they are properly normalized.
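
As a minimal sketch of these two definitions (using NumPy, with made-up values):

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 1.0])

# Euclidean distance as defined above: mean of the squared differences
d_euclid = np.mean((x - y) ** 2)   # (1 + 4 + 4) / 3 = 3.0

# City-block distance: mean of the absolute differences
d_city = np.mean(np.abs(x - y))    # (1 + 2 + 2) / 3 ~ 1.667

print(d_euclid, d_city)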

The Pearson correlation coefficient

The Pearson correlation coefficient is defined as

\[r = \frac{1}{n} \sum_{i=1}^n \left( \frac{x_i -\bar{x}}{\sigma_x} \right) \left(\frac{y_i -\bar{y}}{\sigma_y} \right),\]

in which \(\bar{x}, \bar{y}\) are the sample means of \(x\) and \(y\) respectively, and \(\sigma_x, \sigma_y\) are the sample standard deviations of \(x\) and \(y\). The Pearson correlation coefficient is a measure of how well a straight line can be fitted to a scatterplot of \(x\) and \(y\). If all the points in the scatterplot lie on a straight line, the Pearson correlation coefficient is either +1 or -1, depending on whether the slope of the line is positive or negative. If the Pearson correlation coefficient is equal to zero, there is no correlation between \(x\) and \(y\).

The Pearson distance is then defined as

\[d_{\textrm{P}} \equiv 1 - r.\]

As the Pearson correlation coefficient lies between -1 and 1, the Pearson distance lies between 0 and 2.

Absolute Pearson correlation

By taking the absolute value of the Pearson correlation, we find a number between 0 and 1. If the absolute value is 1, all the points in the scatter plot lie on a straight line with either a positive or a negative slope. If the absolute value is equal to zero, there is no correlation between \(x\) and \(y\).

The corresponding distance is defined as

\[d_{\textrm A} \equiv 1 - \left|r\right|,\]

where \(r\) is the Pearson correlation coefficient. As the absolute value of the Pearson correlation coefficient lies between 0 and 1, the corresponding distance lies between 0 and 1 as well.

In the context of gene expression experiments, the absolute correlation is equal to 1 if the gene expression profiles of two genes are either exactly the same or exactly opposite. The absolute correlation coefficient should therefore be used with care.

Uncentered correlation (cosine of the angle)

In some cases, it may be preferable to use the uncentered correlation instead of the regular Pearson correlation coefficient. The uncentered correlation is defined as

\[r_{\textrm U} = \frac{1}{n} \sum_{i=1}^{n} \left(\frac{x_i}{\sigma_x^{(0)}} \right) \left(\frac{y_i}{\sigma_y^{(0)}} \right),\]

where

\[\begin{split}\begin{aligned} \sigma_x^{(0)} & = & \sqrt{{\frac{1}{n}} \sum_{i=1}^{n}x_i^2}; \nonumber \\ \sigma_y^{(0)} & = & \sqrt{{\frac{1}{n}} \sum_{i=1}^{n}y_i^2}. \nonumber\end{aligned}\end{split}\]

This is the same expression as for the regular Pearson correlation coefficient, except that the sample means \(\bar{x}, \bar{y}\) are set equal to zero. The uncentered correlation may be appropriate if there is a zero reference state. For instance, in the case of gene expression data given in terms of log-ratios, a log-ratio equal to zero corresponds to the green and red signal being equal, which means that the experimental manipulation did not affect the gene expression.

The distance corresponding to the uncentered correlation coefficient is defined as

\[d_{\mbox{U}} \equiv 1 - r_{\mbox{U}},\]

where \(r_{\mbox{U}}\) is the uncentered correlation. As the uncentered correlation coefficient lies between -1 and 1, the corresponding distance lies between 0 and 2.

The uncentered correlation is equal to the cosine of the angle between the two data vectors in \(n\)-dimensional space, and is often referred to as such.
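
A small numerical check of this equivalence (using NumPy, with made-up values):

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 0.0, 1.0])

# Uncentered correlation: like Pearson, but with the means fixed at zero
r_u = np.mean(x * y) / (np.sqrt(np.mean(x ** 2)) * np.sqrt(np.mean(y ** 2)))

# Cosine of the angle between the two vectors
cosine = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

print(r_u, cosine)  # both ~0.5976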

Absolute uncentered correlation

As for the regular Pearson correlation, we can define a distance measure using the absolute value of the uncentered correlation:

\[d_{\mbox{AU}} \equiv 1 - \left|r_{\mbox{U}}\right|,\]

where \(r_{\mbox{U}}\) is the uncentered correlation coefficient. As the absolute value of the uncentered correlation coefficient lies between 0 and 1, the corresponding distance lies between 0 and 1 as well.

Geometrically, the absolute value of the uncentered correlation is equal to the cosine between the supporting lines of the two data vectors (i.e., the angle without taking the direction of the vectors into consideration).

Spearman rank correlation

The Spearman rank correlation is an example of a non-parametric similarity measure, and tends to be more robust against outliers than the Pearson correlation.

To calculate the Spearman rank correlation, we replace each data value by its rank, obtained by ordering the values within each data vector. We then calculate the Pearson correlation between the two rank vectors instead of the original data vectors.

As in the case of the Pearson correlation, we can define a distance measure corresponding to the Spearman rank correlation as

\[d_{\mbox{S}} \equiv 1 - r_{\mbox{S}},\]

where \(r_{\mbox{S}}\) is the Spearman rank correlation.
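
A small sketch of this calculation (using NumPy, with made-up values and assuming no ties in the data):

import numpy as np

def ranks(v):
    # Rank of each value (1 = smallest); assumes no ties for simplicity
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(1, len(v) + 1)
    return r

x = np.array([0.5, 2.0, 1.0, 8.0])
y = np.array([1.0, 4.0, 3.0, 2.0])

# Spearman rank correlation = Pearson correlation of the rank vectors
r_s = np.corrcoef(ranks(x), ranks(y))[0, 1]
d_s = 1 - r_s  # the corresponding distance
print(r_s, d_s)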

Kendall’s \(\tau\)

Kendall’s \(\tau\) is another example of a non-parametric similarity measure. It is similar to the Spearman rank correlation, but instead of the ranks themselves only the relative ranks are used to calculate \(\tau\) (see Snedecor & Cochran @snedecor1989).

We can define a distance measure corresponding to Kendall’s \(\tau\) as

\[d_{\mbox{K}} \equiv 1 - \tau.\]

As Kendall’s \(\tau\) is always between -1 and 1, the corresponding distance will be between 0 and 2.

Weighting

For most of the distance functions available in Bio.Cluster, a weight vector can be applied. The weight vector contains weights for the items in the data vector. If the weight for item \(i\) is \(w_i\), then that item is treated as if it occurred \(w_i\) times in the data. The weights do not have to be integers. For the Spearman rank correlation and Kendall’s \(\tau\), weights do not have a well-defined meaning and are therefore not implemented.
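
As an illustration, a sketch of a weighted Euclidean distance consistent with the description above ("item \(i\) counted \(w_i\) times"); see the C Clustering Library documentation for the exact definitions used internally:

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 1.0])
w = np.array([1.0, 0.5, 2.0])  # hypothetical weights per dimension

# Weighted mean of the squared differences
d = np.sum(w * (x - y) ** 2) / np.sum(w)
print(d)  # (1*1 + 0.5*4 + 2*4) / 3.5 ~ 3.14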

Calculating the distance matrix

The distance matrix is a square matrix with all pairwise distances between the items in data, and can be calculated by the function distancematrix in the Bio.Cluster module:

In [1]:
from Bio.Cluster import distancematrix
matrix = distancematrix(data)

where the following arguments are defined:

  • data (required)
    Array containing the data for the items.
  • mask (default: None)
    Array of integers showing which data are missing. If mask[i,j]==0, then data[i,j] is missing. If mask==None, then all data are present.
  • weight (default: None)
    The weights to be used when calculating distances. If weight==None, then equal weights are assumed.
  • transpose (default: 0)
    Determines if the distances between the rows of data are to be calculated (transpose==0), or between the columns of data (transpose==1).
  • dist (default: 'e', Euclidean distance)
    Defines the distance function to be used (see [sec:distancefunctions]).

To save memory, the distance matrix is returned as a list of 1D arrays. The number of columns in each row is equal to the row number. Hence, the first row has zero elements. An example of the return value is

[array([]),
 array([1.]),
 array([7., 3.]),
 array([4., 2., 6.])]

This corresponds to the distance matrix

\[\begin{split}\left( \begin{array}{cccc} 0 & 1 & 7 & 4 \\ 1 & 0 & 3 & 2 \\ 7 & 3 & 0 & 6 \\ 4 & 2 & 6 & 0 \end{array} \right).\end{split}\]
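
If a full square matrix is needed (for example for inspection or for use with other libraries), the lower-triangular rows can be expanded; a small sketch using NumPy and the example values above:

import numpy as np

# The lower-triangular rows as returned by distancematrix
rows = [np.array([]), np.array([1.]), np.array([7., 3.]), np.array([4., 2., 6.])]

n = len(rows)
full = np.zeros((n, n))
for i, row in enumerate(rows):
    full[i, :i] = row  # fill the left-lower part
    full[:i, i] = row  # mirror into the upper part
print(full)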

Calculating cluster properties

Calculating the cluster centroids

The centroid of a cluster can be defined either as the mean or as the median of each dimension over all cluster items. The function clustercentroids in Bio.Cluster can be used to calculate either:

In [2]:
from Bio.Cluster import clustercentroids
cdata, cmask = clustercentroids(data)

where the following arguments are defined:

  • data (required)
    Array containing the data for the items.
  • mask (default: None)
    Array of integers showing which data are missing. If mask[i,j]==0, then data[i,j] is missing. If mask==None, then all data are present.
  • clusterid (default: None)
    Vector of integers showing to which cluster each item belongs. If clusterid is None, then all items are assumed to belong to the same cluster.
  • method (default: 'a')
    Specifies whether the arithmetic mean (method=='a') or the median (method=='m') is used to calculate the cluster center.
  • transpose (default: 0)
    Determines if the centroids of the rows of data are to be calculated (transpose==0), or the centroids of the columns of data (transpose==1).

This function returns the tuple (cdata, cmask). The centroid data are stored in the 2D Numerical Python array cdata, with missing data indicated by the 2D Numerical Python integer array cmask. The dimensions of these arrays are \(\left(\textrm{number of clusters}, \textrm{number of columns}\right)\) if transpose is 0, or \(\left(\textrm{number of rows}, \textrm{number of clusters}\right)\) if transpose is 1. Each row (if transpose is 0) or column (if transpose is 1) contains the averaged data corresponding to the centroid of each cluster.

Calculating the distance between clusters

Given a distance function between items, we can define the distance between two clusters in several ways. The distance between the arithmetic means of the two clusters is used in pairwise centroid-linkage clustering and in \(k\)-means clustering. In \(k\)-medoids clustering, the distance between the medians of the two clusters is used instead. The shortest pairwise distance between items of the two clusters is used in pairwise single-linkage clustering, while the longest pairwise distance is used in pairwise maximum-linkage clustering. In pairwise average-linkage clustering, the distance between two clusters is defined as the average over the pairwise distances.

To calculate the distance between two clusters, use

In [3]:
from Bio.Cluster import clusterdistance
distance = clusterdistance(data)

where the following arguments are defined:

  • data (required)
    Array containing the data for the items.
  • mask (default: None)
    Array of integers showing which data are missing. If mask[i,j]==0, then data[i,j] is missing. If mask==None, then all data are present.
  • weight (default: None)
    The weights to be used when calculating distances. If weight==None, then equal weights are assumed.
  • index1 (default: 0)
    A list containing the indices of the items belonging to the first cluster. A cluster containing only one item \(i\) can be represented either as a list [i], or as an integer i.
  • index2 (default: 0)
    A list containing the indices of the items belonging to the second cluster. A cluster containing only one item \(i\) can be represented either as a list [i], or as an integer i.
  • method (default: 'a')
    Specifies how the distance between clusters is defined:
    • 'a': Distance between the two cluster centroids (arithmetic mean);
    • 'm': Distance between the two cluster centroids (median);
    • 's': Shortest pairwise distance between items in the two clusters;
    • 'x': Longest pairwise distance between items in the two clusters;
    • 'v': Average over the pairwise distances between items in the two clusters.
  • dist (default: 'e', Euclidean distance)
    Defines the distance function to be used (see [sec:distancefunctions]).
  • transpose (default: 0)
    If transpose==0, calculate the distance between the rows of data. If transpose==1, calculate the distance between the columns of data.
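
As a minimal, hypothetical example on made-up data, calculating the distance between the cluster formed by the first two rows and the cluster formed by the last two rows:

import numpy as np
from Bio.Cluster import clusterdistance

data = np.array([[0.0, 1.0],
                 [0.1, 1.1],
                 [5.0, 6.0],
                 [5.2, 5.9]])

# Distance between the arithmetic-mean centroids of the two clusters,
# using the Euclidean distance
d = clusterdistance(data, index1=[0, 1], index2=[2, 3], method="a", dist="e")
print(d)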

Partitioning algorithms

Partitioning algorithms divide items into \(k\) clusters such that the sum of distances over the items to their cluster centers is minimal. The number of clusters \(k\) is specified by the user. Three partitioning algorithms are available in Bio.Cluster:

  • \(k\)-means clustering
  • \(k\)-medians clustering
  • \(k\)-medoids clustering

These algorithms differ in how the cluster center is defined. In \(k\)-means clustering, the cluster center is defined as the mean data vector averaged over all items in the cluster. Instead of the mean, in \(k\)-medians clustering the median is calculated for each dimension in the data vector. Finally, in \(k\)-medoids clustering the cluster center is defined as the item which has the smallest sum of distances to the other items in the cluster. This clustering algorithm is suitable for cases in which the distance matrix is known but the original data matrix is not available, for example when clustering proteins based on their structural similarity.

The expectation-maximization (EM) algorithm is used to find this partitioning into \(k\) groups. In the initialization of the EM algorithm, we randomly assign items to clusters. To ensure that no empty clusters are produced, we use the binomial distribution to randomly choose the number of items in each cluster to be one or more. We then randomly permute the cluster assignments to items such that each item has an equal probability to be in any cluster. Each cluster is thus guaranteed to contain at least one item.

We then iterate:

  • Calculate the centroid of each cluster, defined as either the mean, the median, or the medoid of the cluster;
  • Calculate the distances of each item to the cluster centers;
  • For each item, determine which cluster centroid is closest;
  • Reassign each item to its closest cluster, or stop the iteration if no further item reassignments take place.

To avoid clusters becoming empty during the iteration, in \(k\)-means and \(k\)-medians clustering the algorithm keeps track of the number of items in each cluster, and prohibits the last remaining item in a cluster from being reassigned to a different cluster. For \(k\)-medoids clustering, such a check is not needed, as the item that functions as the cluster centroid has a zero distance to itself, and will therefore never be closer to a different cluster.

As the initial assignment of items to clusters is done randomly, usually a different clustering solution is found each time the EM algorithm is executed. To find the optimal clustering solution, the \(k\)-means algorithm is repeated many times, each time starting from a different initial random clustering. The sum of distances of the items to their cluster center is saved for each run, and the solution with the smallest value of this sum will be returned as the overall clustering solution.

How often the EM algorithm should be run depends on the number of items being clustered. As a rule of thumb, we can consider how often the optimal solution was found; this number is returned by the partitioning algorithms as implemented in this library. If the optimal solution was found many times, it is unlikely that better solutions exist than the one that was found. However, if the optimal solution was found only once, there may well be other solutions with a smaller within-cluster sum of distances. If the number of items is large (more than several hundred), it may be difficult to find the globally optimal solution.

The EM algorithm terminates when no further reassignments take place. We noticed that for some sets of initial cluster assignments, the EM algorithm fails to converge due to the same clustering solution reappearing periodically after a small number of iteration steps. We therefore check for the occurrence of such periodic solutions during the iteration. After a given number of iteration steps, the current clustering result is saved as a reference. By comparing the clustering result after each subsequent iteration step to the reference state, we can determine if a previously encountered clustering result is found. In such a case, the iteration is halted. If after a given number of iterations the reference state has not yet been encountered, the current clustering solution is saved to be used as the new reference state. Initially, ten iteration steps are executed before resaving the reference state. This number of iteration steps is doubled each time, to ensure that periodic behavior with longer periods can also be detected.

\(k\)-means and \(k\)-medians

The \(k\)-means and \(k\)-medians algorithms are implemented as the function kcluster in Bio.Cluster:

In [4]:
from Bio.Cluster import kcluster
clusterid, error, nfound = kcluster(data)

where the following arguments are defined:

  • data (required)
    Array containing the data for the items.
  • nclusters (default: 2)
    The number of clusters \(k\).
  • mask (default: None)
    Array of integers showing which data are missing. If mask[i,j]==0, then data[i,j] is missing. If mask==None, then all data are present.
  • weight (default: None)
    The weights to be used when calculating distances. If weight==None, then equal weights are assumed.
  • transpose (default: 0)
    Determines if rows (transpose is 0) or columns (transpose is 1) are to be clustered.
  • npass (default: 1)
    The number of times the \(k\)-means/-medians clustering algorithm is performed, each time with a different (random) initial condition. If initialid is given, the value of npass is ignored and the clustering algorithm is run only once, as it behaves deterministically in that case.
  • method (default: 'a')
    describes how the center of a cluster is found:
    • method=='a': arithmetic mean (\(k\)-means clustering);
    • method=='m': median (\(k\)-medians clustering).

    For other values of method, the arithmetic mean is used.

  • dist (default: 'e', Euclidean distance)
    Defines the distance function to be used (see [sec:distancefunctions]). Whereas all eight distance measures are accepted by kcluster, from a theoretical viewpoint it is best to use the Euclidean distance for the \(k\)-means algorithm, and the city-block distance for \(k\)-medians.
  • initialid (default: None)
    Specifies the initial clustering to be used for the EM algorithm. If initialid==None, then a different random initial clustering is used for each of the npass runs of the EM algorithm. If initialid is not None, then it should be equal to a 1D array containing the cluster number (between 0 and nclusters-1) for each item. Each cluster should contain at least one item. With the initial clustering specified, the EM algorithm is deterministic.

This function returns a tuple (clusterid, error, nfound), where clusterid is an integer array containing the number of the cluster to which each row or column was assigned, error is the within-cluster sum of distances for the optimal clustering solution, and nfound is the number of times this optimal solution was found.
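
As a minimal, hypothetical example on made-up data (the cluster labels themselves are arbitrary):

import numpy as np
from Bio.Cluster import kcluster

data = np.array([[0.0, 1.0],
                 [0.2, 0.9],
                 [5.0, 6.0],
                 [5.1, 5.8],
                 [4.9, 6.2]])

# k-means with k=2, repeated 10 times from random initial clusterings
clusterid, error, nfound = kcluster(data, nclusters=2, npass=10)
print(clusterid)        # e.g. [1 1 0 0 0]
print(error, nfound)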

\(k\)-medoids clustering

The kmedoids routine performs \(k\)-medoids clustering on a given set of items, using the distance matrix and the number of clusters passed by the user:

In [5]:
from Bio.Cluster import kmedoids
clusterid, error, nfound = kmedoids(distance)

where the following arguments are defined:

  • distance (required)
    The matrix containing the distances between the items; this matrix can be specified in three ways:
    • as a 2D Numerical Python array (in which only the left-lower part of the array will be accessed):

      distance = array([[0.0, 1.1, 2.3],
                        [1.1, 0.0, 4.5],
                        [2.3, 4.5, 0.0]])

    • as a 1D Numerical Python array containing consecutively the distances in the left-lower part of the distance matrix:

      distance = array([1.1, 2.3, 4.5])

    • as a list containing the rows of the left-lower part of the distance matrix:

      distance = [array([]),
                  array([1.1]),
                  array([2.3, 4.5])]

    These three expressions correspond to the same distance matrix.
  • nclusters (default: 2)
    The number of clusters \(k\).
  • npass (default: 1)
    The number of times the \(k\)-medoids clustering algorithm is performed, each time with a different (random) initial condition. If initialid is given, the value of npass is ignored, as the clustering algorithm behaves deterministically in that case.
  • initialid (default: None)
    Specifies the initial clustering to be used for the EM algorithm. If initialid==None, then a different random initial clustering is used for each of the npass runs of the EM algorithm. If initialid is not None, then it should be equal to a 1D array containing the cluster number (between 0 and nclusters-1) for each item. Each cluster should contain at least one item. With the initial clustering specified, the EM algorithm is deterministic.

This function returns a tuple (clusterid, error, nfound), where clusterid is an array containing the number of the cluster to which each item was assigned, error is the within-cluster sum of distances for the optimal \(k\)-medoids clustering solution, and nfound is the number of times the optimal solution was found. Note that the cluster number in clusterid is defined as the item number of the item representing the cluster centroid.
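
As a minimal, hypothetical example, using the small distance matrix shown above in its list-of-rows form:

import numpy as np
from Bio.Cluster import kmedoids

distance = [np.array([]),
            np.array([1.1]),
            np.array([2.3, 4.5])]

# k-medoids with k=2, repeated 10 times from random initial clusterings
clusterid, error, nfound = kmedoids(distance, nclusters=2, npass=10)
print(clusterid)  # each cluster is labeled by the index of its medoid item
print(error, nfound)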

Hierarchical clustering

Hierarchical clustering methods are inherently different from the \(k\)-means clustering method. In hierarchical clustering, the similarity in expression profile between genes or experimental conditions is represented in the form of a tree structure. This tree structure can be shown graphically by programs such as Treeview and Java Treeview, which has contributed to the popularity of hierarchical clustering in the analysis of gene expression data.

The first step in hierarchical clustering is to calculate the distance matrix, specifying all the distances between the items to be clustered. Next, we create a node by joining the two closest items. Subsequent nodes are created by pairwise joining of items or nodes based on the distance between them, until all items belong to the same node. A tree structure can then be created by retracing which items and nodes were merged. Unlike the EM algorithm, which is used in \(k\)-means clustering, the complete process of hierarchical clustering is deterministic.

Several flavors of hierarchical clustering exist, which differ in how the distance between subnodes is defined in terms of their members. In Bio.Cluster, pairwise single, maximum, average, and centroid linkage are available.

  • In pairwise single-linkage clustering, the distance between two nodes is defined as the shortest distance among the pairwise distances between the members of the two nodes.
  • In pairwise maximum-linkage clustering, alternatively known as pairwise complete-linkage clustering, the distance between two nodes is defined as the longest distance among the pairwise distances between the members of the two nodes.
  • In pairwise average-linkage clustering, the distance between two nodes is defined as the average over all pairwise distances between the items of the two nodes.
  • In pairwise centroid-linkage clustering, the distance between two nodes is defined as the distance between their centroids. The centroids are calculated by taking the mean over all the items in a cluster. As the distance from each newly formed node to existing nodes and items need to be calculated at each step, the computing time of pairwise centroid-linkage clustering may be significantly longer than for the other hierarchical clustering methods. Another peculiarity is that (for a distance measure based on the Pearson correlation), the distances do not necessarily increase when going up in the clustering tree, and may even decrease. This is caused by an inconsistency between the centroid calculation and the distance calculation when using the Pearson correlation: Whereas the Pearson correlation effectively normalizes the data for the distance calculation, no such normalization occurs for the centroid calculation.

For pairwise single-, complete-, and average-linkage clustering, the distance between two nodes can be found directly from the distances between the individual items. Therefore, the clustering algorithm does not need access to the original gene expression data, once the distance matrix is known. For pairwise centroid-linkage clustering, however, the centroids of newly formed subnodes can only be calculated from the original data and not from the distance matrix.

The implementation of pairwise single-linkage hierarchical clustering is based on the SLINK algorithm (R. Sibson, 1973), which is much faster and more memory-efficient than a straightforward implementation of pairwise single-linkage clustering. The clustering result produced by this algorithm is identical to the clustering solution found by the conventional single-linkage algorithm. The single-linkage hierarchical clustering algorithm implemented in this library can be used to cluster large gene expression data sets, for which conventional hierarchical clustering algorithms fail due to excessive memory requirements and running time.

Representing a hierarchical clustering solution

The result of hierarchical clustering consists of a tree of nodes, in which each node joins two items or subnodes. Usually, we are not only interested in which items or subnodes are joined at each node, but also in their similarity (or distance) as they are joined. To store one node in the hierarchical clustering tree, we make use of the class Node, which is defined in Bio.Cluster. An instance of Node has three attributes:

  • left
  • right
  • distance

Here, left and right are integers referring to the two items or subnodes that are joined at this node, and distance is the distance between them. The items being clustered are numbered from 0 to \(\left(\textrm{number of items} - 1\right)\), while clusters are numbered from -1 to \(-\left(\textrm{number of items}-1\right)\). Note that the number of nodes is one less than the number of items.

To create a new Node object, we need to specify left and right; distance is optional.

In [6]:
from Bio.Cluster import Node
Node(2, 3)
Out[6]:
(2, 3): 0
In [7]:
Node(2, 3, 0.91)
Out[7]:
(2, 3): 0.91

The attributes left, right, and distance of an existing Node object can be modified directly:

In [8]:
node = Node(4, 5)
node.left = 6
node.right = 2
node.distance = 0.73
node
Out[8]:
(6, 2): 0.73

An error is raised if left and right are not integers, or if distance cannot be converted to a floating-point value.

The Python class Tree represents a full hierarchical clustering solution. A Tree object can be created from a list of Node objects:

In [9]:
from Bio.Cluster import Node, Tree
nodes = [Node(1, 2, 0.2), Node(0, 3, 0.5), Node(-2, 4, 0.6), Node(-1, -3, 0.9)]
tree = Tree(nodes)
print(tree)
(1, 2): 0.2
(0, 3): 0.5
(-2, 4): 0.6
(-1, -3): 0.9

The Tree initializer checks if the list of nodes is a valid hierarchical clustering result:

In [11]:
nodes = [Node(1, 2, 0.2), Node(0, 2, 0.5)]
try:
    Tree(nodes)
    raise Exception("Should not arrive here")
except ValueError:
    print("This tree is problematic")
This tree is problematic

Individual nodes in a Tree object can be accessed using square brackets:

In [12]:
nodes = [Node(1, 2, 0.2), Node(0, -1, 0.5)]
tree = Tree(nodes)
tree[0]
Out[12]:
(1, 2): 0.2
In [13]:
tree[1]
Out[13]:
(0, -1): 0.5
In [14]:
tree[-1]
Out[14]:
(0, -1): 0.5

As a Tree object is read-only, we cannot change individual nodes in a Tree object. However, we can convert the tree to a list of nodes, modify this list, and create a new tree from this list:

In [15]:
tree = Tree([Node(1, 2, 0.1), Node(0, -1, 0.5), Node(-2, 3, 0.9)])
print(tree)
(1, 2): 0.1
(0, -1): 0.5
(-2, 3): 0.9
In [16]:
nodes = tree[:]
nodes[0] = Node(0, 1, 0.2)
nodes[1].left = 2
tree = Tree(nodes)
print(tree)

This guarantees that any Tree object is always well-formed.

To display a hierarchical clustering solution with visualization programs such as Java Treeview, it is better to scale all node distances such that they are between zero and one. This can be accomplished by calling the scale method on an existing Tree object:

In [12]:
tree.scale()

This method takes no arguments, and returns None.

After hierarchical clustering, the items can be grouped into \(k\) clusters based on the tree structure stored in the Tree object by cutting the tree:

In [13]:
clusterid = tree.cut(nclusters=1)

where nclusters (defaulting to 1) is the desired number of clusters \(k\). This method ignores the top \(k-1\) linking events in the tree structure, resulting in \(k\) separated clusters of items. The number of clusters \(k\) should be positive, and less than or equal to the number of items. This method returns an array clusterid containing the number of the cluster to which each item is assigned.

Performing hierarchical clustering

To perform hierarchical clustering, use the treecluster function in Bio.Cluster.

In [14]:
from Bio.Cluster import treecluster
tree = treecluster(data)

where the following arguments are defined:

  • data
    Array containing the data for the items.
  • mask (default: None)
    Array of integers showing which data are missing. If mask[i,j]==0, then data[i,j] is missing. If mask==None, then all data are present.
  • weight (default: None)
    The weights to be used when calculating distances. If weight==None, then equal weights are assumed.
  • transpose (default: 0)
    Determines if rows (transpose==0) or columns (transpose==1) are to be clustered.
  • method (default: 'm')
    defines the linkage method to be used:
    • method=='s': pairwise single-linkage clustering
    • method=='m': pairwise maximum- (or complete-) linkage clustering
    • method=='c': pairwise centroid-linkage clustering
    • method=='a': pairwise average-linkage clustering
  • dist (default: 'e', Euclidean distance)
    Defines the distance function to be used (see [sec:distancefunctions]).

To apply hierarchical clustering to a precalculated distance matrix, specify the distancematrix argument when calling the treecluster function instead of the data argument:

In [17]:
from Bio.Cluster import treecluster
tree = treecluster(distancematrix=distance)

In this case, the following arguments are defined:

  • distancematrix
    The distance matrix, which can be specified in three ways:
    • as a 2D Numerical Python array (in which only the left-lower part of the array will be accessed):

      distance = array([[0.0, 1.1, 2.3],
                        [1.1, 0.0, 4.5],
                        [2.3, 4.5, 0.0]])

    • as a 1D Numerical Python array containing consecutively the distances in the left-lower part of the distance matrix:

      distance = array([1.1, 2.3, 4.5])

    • as a list containing the rows of the left-lower part of the distance matrix:

      distance = [array([]),
                  array([1.1]),
                  array([2.3, 4.5])]

    These three expressions correspond to the same distance matrix. As treecluster may shuffle the values in the distance matrix as part of the clustering algorithm, be sure to save this array in a different variable before calling treecluster if you need it later.
  • method
    The linkage method to be used:
    • method=='s': pairwise single-linkage clustering
    • method=='m': pairwise maximum- (or complete-) linkage clustering
    • method=='a': pairwise average-linkage clustering

    While pairwise single-, maximum-, and average-linkage clustering can be calculated from the distance matrix alone, pairwise centroid-linkage cannot.

When calling treecluster, either data or distancematrix should be None.

This function returns a Tree object. This object contains \(\left(\textrm{number of items} - 1\right)\) nodes, where the number of items is the number of rows if rows were clustered, or the number of columns if columns were clustered. Each node describes a pairwise linking event, where the node attributes left and right each contain the number of one item or subnode, and distance the distance between them. Items are numbered from 0 to \(\left(\textrm{number of items} - 1\right)\), while clusters are numbered -1 to \(-\left(\textrm{number of items}-1\right)\).
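
As a minimal, hypothetical example on made-up data, combining treecluster with the scale and cut methods described earlier:

import numpy as np
from Bio.Cluster import treecluster

data = np.array([[0.0, 1.0],
                 [0.2, 0.9],
                 [5.0, 6.0],
                 [5.1, 5.8]])

# Pairwise average-linkage clustering with the Euclidean distance
tree = treecluster(data, method="a", dist="e")
print(tree)              # one line per node: (left, right): distance
tree.scale()             # rescale node distances to the range [0, 1]
clusterid = tree.cut(2)  # group the items into two clusters
print(clusterid)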

Self-Organizing Maps

Self-Organizing Maps (SOMs) were invented by Kohonen to describe neural networks (see for instance Kohonen, 1997 @kohonen1997). Tamayo (1999) first applied Self-Organizing Maps to gene expression data @tamayo1999.

SOMs organize items into clusters that are situated in some topology. Usually a rectangular topology is chosen. The clusters generated by SOMs are such that neighboring clusters in the topology are more similar to each other than clusters far from each other in the topology.

The first step to calculate a SOM is to randomly assign a data vector to each cluster in the topology. If rows are being clustered, then the number of elements in each data vector is equal to the number of columns.

An SOM is then generated by taking rows one at a time, and finding which cluster in the topology has the closest data vector. The data vector of that cluster, as well as those of the neighboring clusters, are adjusted using the data vector of the row under consideration. The adjustment is given by

\[\Delta \underline{x}_{\textrm{cell}} = \tau \cdot \left(\underline{x}_{\textrm{row}} - \underline{x}_{\textrm{cell}} \right).\]

The parameter \(\tau\) decreases at each iteration step. We have used a simple linear function of the iteration step:

\[\tau = \tau_{\textrm{init}} \cdot \left(1 - {i \over n}\right),\]

where \(\tau_{\textrm{init}}\) is the initial value of \(\tau\) as specified by the user, \(i\) is the number of the current iteration step, and \(n\) is the total number of iteration steps to be performed. While changes are made rapidly in the beginning of the iteration, at the end of the iteration only small changes are made.

All clusters within a radius \(R\) are adjusted to the gene under consideration. This radius decreases as the calculation progresses as

\[R = R_{\textrm{max}} \cdot \left(1 - {i \over n}\right),\]

in which the maximum radius is defined as

\[R_{\textrm{max}} = \sqrt{N_x^2 + N_y^2},\]

where \(\left(N_x, N_y\right)\) are the dimensions of the rectangle defining the topology.

The function somcluster implements the complete algorithm to calculate a Self-Organizing Map on a rectangular grid. First it initializes the random number generator. The node data are then initialized using the random number generator. The order in which genes or microarrays are used to modify the SOM is also randomized. The total number of iterations in the SOM algorithm is specified by the user.

To run somcluster, use

In [19]:
from Bio.Cluster import somcluster
clusterid, celldata = somcluster(data)

where the following arguments are defined:

  • data (required)
    Array containing the data for the items.
  • mask (default: None)
    Array of integers showing which data are missing. If mask[i,j]==0, then data[i,j] is missing. If mask==None, then all data are present.
  • weight (default: None)
    contains the weights to be used when calculating distances. If weight==None, then equal weights are assumed.
  • transpose (default: 0)
    Determines if rows (transpose is 0) or columns (transpose is 1) are to be clustered.
  • nxgrid, nygrid (default: 2, 1)
    The number of cells horizontally and vertically in the rectangular grid on which the Self-Organizing Map is calculated.
  • inittau (default: 0.02)
    The initial value for the parameter \(\tau\) that is used in the SOM algorithm. The default value for inittau is 0.02, which was used in Michael Eisen’s Cluster/TreeView program.
  • niter (default: 1)
    The number of iterations to be performed.
  • dist (default: 'e', Euclidean distance)
    Defines the distance function to be used (see [sec:distancefunctions]).

This function returns the tuple (clusterid, celldata):

  • clusterid:
    An array with two columns, where the number of rows is equal to the number of items that were clustered. Each row contains the \(x\) and \(y\) coordinates of the cell in the rectangular SOM grid to which the item was assigned.
  • celldata:
    An array with dimensions \(\left(\verb|nxgrid|, \verb|nygrid|, \textrm{number of columns}\right)\) if rows are being clustered, or \(\left(\verb|nxgrid|, \verb|nygrid|, \textrm{number of rows}\right)\) if columns are being clustered. Each element [ix][iy] of this array is a 1D vector containing the gene expression data for the centroid of the cluster in the grid cell with coordinates [ix][iy].
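
As a minimal, hypothetical example on random data:

import numpy as np
from Bio.Cluster import somcluster

data = np.random.rand(20, 5)  # 20 items, 5 conditions

# SOM on a 3 x 2 rectangular grid, with 100 iterations
clusterid, celldata = somcluster(data, nxgrid=3, nygrid=2, niter=100)
print(clusterid[:5])   # (x, y) grid coordinates assigned to the first items
print(celldata.shape)  # (3, 2, 5)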

Principal Component Analysis

Principal Component Analysis (PCA) is a widely used technique for analyzing multivariate data. A practical example of applying Principal Component Analysis to gene expression data is presented by Yeung and Ruzzo (2001) @yeung2001.

In essence, PCA is a coordinate transformation in which each row in the data matrix is written as a linear sum over basis vectors called principal components, which are ordered and chosen such that each maximally explains the remaining variance in the data vectors. For example, an \(n \times 3\) data matrix can be represented as an ellipsoidal cloud of \(n\) points in three dimensional space. The first principal component is the longest axis of the ellipsoid, the second principal component the second longest axis of the ellipsoid, and the third principal component is the shortest axis. Each row in the data matrix can be reconstructed as a suitable linear combination of the principal components. However, in order to reduce the dimensionality of the data, usually only the most important principal components are retained. The remaining variance present in the data is then regarded as unexplained variance.

The principal components can be found by calculating the eigenvectors of the covariance matrix of the data. The corresponding eigenvalues determine how much of the variance present in the data is explained by each principal component.

Before applying principal component analysis, typically the mean is subtracted from each column in the data matrix. In the example above, this effectively centers the ellipsoidal cloud around its centroid in 3D space, with the principal components describing the variation of points in the ellipsoidal cloud with respect to their centroid.

The function pca below first uses the singular value decomposition to calculate the eigenvalues and eigenvectors of the data matrix. The singular value decomposition is implemented as a translation in C of the Algol procedure svd @golub1971, which uses Householder bidiagonalization and a variant of the QR algorithm. The principal components, the coordinates of each data vector along the principal components, and the eigenvalues corresponding to the principal components are then evaluated and returned in decreasing order of the magnitude of the eigenvalue. If data centering is desired, the mean should be subtracted from each column in the data matrix before calling the pca routine.

To apply Principal Component Analysis to a rectangular matrix data, use

In [17]:
from Bio.Cluster import pca
columnmean, coordinates, components, eigenvalues = pca(data)

This function returns a tuple columnmean, coordinates, components, eigenvalues:

  • columnmean
    Array containing the mean over each column in data.
  • coordinates
    The coordinates of each row in data with respect to the principal components.
  • components
    The principal components.
  • eigenvalues
    The eigenvalues corresponding to each of the principal components.

The original matrix data can be recreated by calculating columnmean +  dot(coordinates, components).
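
A small sketch (on random data) illustrating this reconstruction:

import numpy as np
from Bio.Cluster import pca

data = np.random.rand(10, 4)

columnmean, coordinates, components, eigenvalues = pca(data)

# Recover the original data from the returned values
reconstructed = columnmean + np.dot(coordinates, components)
print(np.allclose(data, reconstructed))  # should print True (up to numerical precision)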

Handling Cluster/TreeView-type files

Cluster/TreeView are GUI-based programs for clustering gene expression data. They were originally written by Michael Eisen while at Stanford University. Bio.Cluster contains functions for reading and writing data files that correspond to the format specified for Cluster/TreeView. In particular, by saving a clustering result in that format, TreeView can be used to visualize the clustering results. We recommend using Alok Saldanha’s Java TreeView program (http://jtreeview.sourceforge.net/), which can display hierarchical as well as \(k\)-means clustering results.

An object of the class Record contains all information stored in a Cluster/TreeView-type data file. To store the information contained in the data file in a Record object, we first open the file and then read it:

In [20]:
from Bio import Cluster
handle = open("mydatafile.txt")
record = Cluster.read(handle)
handle.close()

This two-step process gives you some flexibility in the source of the data. For example, you can use

In [19]:
import gzip # Python standard library
handle = gzip.open("mydatafile.txt.gz")

to open a gzipped file, or

In [20]:
import urllib.request  # Python standard library
handle = urllib.request.urlopen("http://somewhere.org/mydatafile.txt")

to open a file stored on the Internet before calling read.

The read command reads the tab-delimited text file mydatafile.txt containing gene expression data in the format specified for Michael Eisen’s Cluster/TreeView program. For a description of this file format, see the manual to Cluster/TreeView. It is available at Michael Eisen’s lab website and at our website.

A Record object has the following attributes:

  • data
    The data array containing the gene expression data. Genes are stored row-wise, while microarrays are stored column-wise.
  • mask
    This array shows which elements in the data array, if any, are missing. If mask[i,j]==0, then data[i,j] is missing. If no data were found to be missing, mask is set to None.
  • geneid
    This is a list containing a unique description for each gene (e.g., ORF numbers).
  • genename
    This is a list containing a description for each gene (e.g., gene name). If not present in the data file, genename is set to None.
  • gweight
    The weights that are to be used to calculate the distance in expression profile between genes. If not present in the data file, gweight is set to None.
  • gorder
    The preferred order in which genes should be stored in an output file. If not present in the data file, gorder is set to None.
  • expid
    This is a list containing a description of each microarray, e.g. experimental condition.
  • eweight
    The weights that are to be used to calculate the distance in expression profile between microarrays. If not present in the data file, eweight is set to None.
  • eorder
    The preferred order in which microarrays should be stored in an output file. If not present in the data file, eorder is set to None.
  • uniqid
    The string that was used instead of UNIQID in the data file.

After loading a Record object, each of these attributes can be accessed and modified directly. For example, the data can be log-transformed by taking the logarithm of record.data.
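
For example, a hedged sketch of such a log transformation (assuming the data file from above exists and that all expression values are positive):

import numpy as np
from Bio import Cluster

with open("mydatafile.txt") as handle:  # hypothetical file name from above
    record = Cluster.read(handle)

# Replace the expression values by their natural logarithm;
# use np.log2 instead if base-2 log-ratios are preferred.
record.data = np.log(record.data)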

Calculating the distance matrix

To calculate the distance matrix between the items stored in the record, use

In [21]:
matrix = record.distancematrix()

where the following arguments are defined:

  • transpose (default: 0)
    Determines if the distances between the rows of data are to be calculated (transpose==0), or between the columns of data (transpose==1).
  • dist (default: 'e', Euclidean distance)
    Defines the distance function to be used (see [sec:distancefunctions]).

This function returns the distance matrix as a list of rows, where the number of columns of each row is equal to the row number (see section [subsec:distancematrix]).

Calculating the cluster centroids

To calculate the centroids of clusters of items stored in the record, use

In [22]:
cdata, cmask = record.clustercentroids()

where the following arguments are defined:

  • clusterid (default: None)
    Vector of integers showing to which cluster each item belongs. If clusterid is not given, then all items are assumed to belong to the same cluster.
  • method (default: 'a')
    Specifies whether the arithmetic mean (method=='a') or the median (method=='m') is used to calculate the cluster center.
  • transpose (default: 0)
    Determines if the centroids of the rows of data are to be calculated (transpose==0), or the centroids of the columns of data (transpose==1).

This function returns the tuple cdata, cmask; see section [subsec:clustercentroids] for a description.

Calculating the distance between clusters

To calculate the distance between clusters of items stored in the record, use

In [23]:
distance = record.clusterdistance()
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-23-d87103d6884f> in <module>()
----> 1 distance = record.clusterdistance()

NameError: name 'record' is not defined

where the following arguments are defined:

  • index1 (default: 0)
    A list containing the indices of the items belonging to the first cluster. A cluster containing only one item \(i\) can be represented either as a list [i], or as an integer i.
  • index2 (default: 0)
    A list containing the indices of the items belonging to the second cluster. A cluster containing only one item \(i\) can be represented either as a list [i], or as an integer i.
  • method (default: 'a')
    Specifies how the distance between clusters is defined:
    • 'a': Distance between the two cluster centroids (arithmetic mean);
    • 'm': Distance between the two cluster centroids (median);
    • 's': Shortest pairwise distance between items in the two clusters;
    • 'x': Longest pairwise distance between items in the two clusters;
    • 'v': Average over the pairwise distances between items in the two clusters.
  • dist (default: 'e', Euclidean distance)
    Defines the distance function to be used (see [sec:distancefunctions]).
  • transpose (default: 0)
    If transpose==0, calculate the distance between the rows of data. If transpose==1, calculate the distance between the columns of data.
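
For instance, a brief sketch measuring how far the pair of genes {0, 1} lies from gene 2 (record as in the earlier sketches):

# Distance between the cluster {gene 0, gene 1} and the singleton cluster {gene 2}
d = record.clusterdistance(index1=[0, 1], index2=2, method='a', dist='e')
print(d)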

Performing hierarchical clustering

To perform hierarchical clustering on the items stored in the record, use

In [24]:
tree = record.treecluster()

where the following arguments are defined:

  • transpose (default: 0)
    Determines if genes (rows, transpose==0) or microarrays (columns, transpose==1) are to be clustered.
  • method (default: 'm')
    Defines the linkage method to be used:
    • method=='s': pairwise single-linkage clustering
    • method=='m': pairwise maximum- (or complete-) linkage clustering
    • method=='c': pairwise centroid-linkage clustering
    • method=='a': pairwise average-linkage clustering
  • dist (default: 'e', Euclidean distance)
    Defines the distance function to be used (see [sec:distancefunctions]).

This function returns a Tree object. This object contains \(\left(\textrm{number of items} - 1\right)\) nodes, where the number of items is the number of rows if rows were clustered, or the number of columns if columns were clustered. Each node describes a pairwise linking event, where the node attributes left and right each contain the number of one item or subnode, and distance the distance between them. Items are numbered from 0 to \(\left(\textrm{number of items} - 1\right)\), while clusters are numbered -1 to \(-\left(\textrm{number of items}-1\right)\).
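
A short sketch of inspecting the returned Tree node by node and flattening it into a fixed number of clusters with its cut method (record as in the earlier sketches):

tree = record.treecluster(method='s', dist='e')
# One node per joining event: number of nodes = number of items - 1
for i in range(record.data.shape[0] - 1):
    node = tree[i]
    print(node.left, node.right, node.distance)
clusterid = tree.cut(2)    # assign every gene to one of two flat clusters
print(clusterid)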

Performing \(k\)-means or \(k\)-medians clustering

To perform \(k\)-means or \(k\)-medians clustering on the items stored in the record, use

In [25]:
clusterid, error, nfound = record.kcluster()

where the following arguments are defined:

  • nclusters (default: 2)
    The number of clusters \(k\).
  • transpose (default: 0)
    Determines if rows (transpose is 0) or columns (transpose is 1) are to be clustered.
  • npass (default: 1)
    The number of times the \(k\)-means/-medians clustering algorithm is performed, each time with a different (random) initial condition. If initialid is given, the value of npass is ignored and the clustering algorithm is run only once, as it behaves deterministically in that case.
  • method (default: 'a')
    Describes how the center of a cluster is found:
    • method=='a': arithmetic mean (\(k\)-means clustering);
    • method=='m': median (\(k\)-medians clustering).

    For other values of method, the arithmetic mean is used.

  • dist (default: 'e', Euclidean distance)
    Defines the distance function to be used (see [sec:distancefunctions]).

This function returns a tuple (clusterid, error, nfound), where clusterid is an integer array containing the number of the cluster to which each row or column was assigned, error is the within-cluster sum of distances for the optimal clustering solution, and nfound is the number of times this optimal solution was found.
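
For example, a minimal sketch with the main arguments spelled out (record as in the earlier sketches; the parameter values are purely illustrative):

clusterid, error, nfound = record.kcluster(nclusters=2, npass=100, method='a', dist='e')
print(clusterid)        # cluster number assigned to each gene (row)
print(error, nfound)    # within-cluster sum of distances, and how often this optimum was found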

Calculating a Self-Organizing Map

To calculate a Self-Organizing Map of the items stored in the record, use

In [26]:
clusterid, celldata = record.somcluster()

where the following arguments are defined:

  • transpose (default: 0)
    Determines if rows (transpose is 0) or columns (transpose is 1) are to be clustered.
  • nxgrid, nygrid (default: 2, 1)
    The number of cells horizontally and vertically in the rectangular grid on which the Self-Organizing Map is calculated.
  • inittau (default: 0.02)
    The initial value for the parameter \(\tau\) that is used in the SOM algorithm. The default value for inittau is 0.02, which was used in Michael Eisen’s Cluster/TreeView program.
  • niter (default: 1)
    The number of iterations to be performed.
  • dist (default: 'e', Euclidean distance)
    Defines the distance function to be used (see [sec:distancefunctions]).

This function returns the tuple (clusterid, celldata):

  • clusterid:
    An array with two columns, where the number of rows is equal to the number of items that were clustered. Each row contains the \(x\) and \(y\) coordinates of the cell in the rectangular SOM grid to which the item was assigned.
  • celldata:
    An array with dimensions \(\left(\verb|nxgrid|, \verb|nygrid|, \textrm{number of columns}\right)\) if rows are being clustered, or \(\left(\verb|nxgrid|, \verb|nygrid|, \textrm{number of rows}\right)\) if columns are being clustered. Each element [ix][iy] of this array is a 1D vector containing the gene expression data for the centroid of the cluster in the grid cell with coordinates [ix][iy].
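
A small sketch of the shapes involved (record as in the earlier sketches; the 3x2 grid and 100 iterations are arbitrary illustration values):

clusterid, celldata = record.somcluster(nxgrid=3, nygrid=2, niter=100)
print(clusterid.shape)    # (number of genes, 2): grid x/y coordinates for each gene
print(celldata.shape)     # (nxgrid, nygrid, number of experiments)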

Saving the clustering result

To save the clustering result, use

In [27]:
record.save(jobname, geneclusters, expclusters)

where the following arguments are defined:

  • jobname
    The string jobname is used as the base name for names of the files that are to be saved.
  • geneclusters
    This argument describes the gene (row-wise) clustering result. In case of \(k\)-means clustering, this is a 1D array containing the number of the cluster each gene belongs to. It can be calculated using kcluster. In case of hierarchical clustering, geneclusters is a Tree object.
  • expclusters
    This argument describes the (column-wise) clustering result for the experimental conditions. In case of \(k\)-means clustering, this is a 1D array containing the number of the cluster each experimental condition belongs to. It can be calculated using kcluster. In case of hierarchical clustering, expclusters is a Tree object.

This method writes the text files jobname.cdt, jobname.gtr, jobname.atr, jobname*.kgg, and/or jobname*.kag for subsequent reading by the Java TreeView program. If geneclusters and expclusters are both None, this method only writes the text file jobname.cdt; this file can subsequently be read into a new Record object.
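
To illustrate the last point, a hedged sketch that writes only the .cdt file and reads it back in; the jobname cyano_copy is just an illustrative name, and record is assumed to have been loaded from a real data file (as in the example below):

from Bio import Cluster

# record: a Record previously loaded with Cluster.read from a data file
record.save("cyano_copy")                 # no clusters given, so only cyano_copy.cdt is written
with open("cyano_copy.cdt") as handle:    # the .cdt uses the same tab-delimited format Cluster.read expects
    copy = Cluster.read(handle)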

Example calculation

This is an example of a hierarchical clustering calculation, using single-linkage clustering for genes and maximum-linkage clustering for experimental conditions. As the Euclidean distance is used for gene clustering, the node distances in genetree must be scaled so that they all lie between zero and one; this is needed for the Java TreeView code to display the tree diagram correctly. The experimental conditions are clustered using the uncentered correlation, for which no scaling is needed, as the distances in exptree already lie between zero and two. The example data file cyano.txt can be found in the data subdirectory.

In [28]:
from Bio import Cluster
handle = open("cyano.txt")
record = Cluster.read(handle)
handle.close()
genetree = record.treecluster(method='s')
genetree.scale()
exptree = record.treecluster(dist='u', transpose=1)
record.save("cyano_result", genetree, exptree)

This will create the files cyano_result.cdt, cyano_result.gtr, and cyano_result.atr.

Similarly, we can save a \(k\)-means clustering solution:

In [29]:
from Bio import Cluster
handle = open("cyano.txt")
record = Cluster.read(handle)
handle.close()
(geneclusters, error, ifound) = record.kcluster(nclusters=5, npass=1000)
(expclusters, error, ifound) = record.kcluster(nclusters=2, npass=100, transpose=1)
record.save("cyano_result", geneclusters, expclusters)
